
    Bayesian network based computer vision algorithm for traffic monitoring using video

    This paper presents a novel approach to estimating the 3D velocity of vehicles from video. We propose using a Bayesian network to classify objects into pedestrians and different types of vehicles, using 2D features extracted from video taken by a stationary camera. The classification allows us to estimate an approximate 3D model for each class. The height information is then used, together with the image co-ordinates of the object and the camera's perspective projection matrix, to estimate the object's 3D world co-ordinates and hence its 3D velocity. Accurate velocity and acceleration estimates are both very useful parameters in traffic monitoring systems. We show results of highly accurate classification and measurement of vehicle motion from real-life traffic video streams.
    Kumar, P.; Ranganath, S.; Weimin, H
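    The back-projection step described above (image co-ordinates plus a known object height, inverted through the camera's projection matrix) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual formulation: the projection matrix values and the convention that the known height is the world Z co-ordinate are assumptions.

    ```python
    import numpy as np

    def backproject_at_height(P, u, v, h):
        """Recover world (X, Y) of image point (u, v), assuming the point
        lies at known world co-ordinate Z = h, given a 3x4 projection
        matrix P. From P [X, Y, h, 1]^T ~ [u, v, 1]^T we get two linear
        equations in the two unknowns X and Y."""
        A = np.array([
            [P[0, 0] - u * P[2, 0], P[0, 1] - u * P[2, 1]],
            [P[1, 0] - v * P[2, 0], P[1, 1] - v * P[2, 1]],
        ])
        b = np.array([
            u * (P[2, 2] * h + P[2, 3]) - (P[0, 2] * h + P[0, 3]),
            v * (P[2, 2] * h + P[2, 3]) - (P[1, 2] * h + P[1, 3]),
        ])
        X, Y = np.linalg.solve(A, b)
        return X, Y
    ```

    Differencing the recovered world positions across frames (divided by the frame interval) then yields the 3D velocity estimate the abstract refers to.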

    Framework for real time behavior interpretation from traffic video

    © 2005 IEEE. Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information than other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene, and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It is also capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian–vehicle and vehicle–checkpost interactions.
    Kumar, P.; Ranganath, S.; Huang Weimin; Sengupta, K

    Cooperative multitarget tracking with efficient split and merge handling

    Copyright © 2006 IEEE. For applications such as behavior recognition it is important to maintain the identity of multiple targets while tracking them in the presence of splits and merges, or occlusion of the targets by background obstacles. Here we propose an algorithm to handle multiple splits and merges of objects based on dynamic programming and a new geometric shape matching measure. We then cooperatively combine Kalman filter-based motion and shape tracking with the efficient and novel geometric shape matching algorithm. The system is fully automatic and requires no manual input of any kind to initialize tracking. The target track initialization problem is formulated as computation of shortest paths in a directed and attributed graph using Dijkstra's shortest path algorithm. This scheme correctly initializes multiple target tracks even in the presence of clutter and segmentation errors which may occur in detecting a target. We present results on a large number of real-world image sequences, where up to 17 objects have been tracked simultaneously in real time, despite clutter, splits, and merges in measurements of objects. The complete tracking system, including segmentation of moving objects, works at 25 Hz on 352×288 pixel color image sequences on a 2.8-GHz Pentium-4 workstation.
    Pankaj Kumar, Surendra Ranganath, Kuntal Sengupta, and Huang Weimin
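    The track-initialization step above casts candidate detections as nodes of a directed, attributed graph and solves for shortest paths with Dijkstra's algorithm. A generic priority-queue implementation of that core routine is sketched below; the graph construction and the edge weights (which in the paper would encode the attributed matching costs between detections) are placeholders, not the authors' actual formulation.

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest-path costs from source in a directed weighted graph.
        graph: {node: [(neighbor, edge_weight), ...]} with non-negative weights."""
        dist = {source: 0.0}
        pq = [(0.0, source)]  # (cost-so-far, node)
        while pq:
            d, node = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, w in graph.get(node, []):
                nd = d + w
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(pq, (nd, nbr))
        return dist
    ```

    In a tracking context, each node would be a detection in some frame, edges would link detections in consecutive frames, and the cheapest path through the graph would give the most plausible initial track.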

    Via Coupling Within Power-Return Plane Structures Considering the Radiation Loss

    An accurate analytical model to predict via coupling within rectangular power-return plane structures is developed. Loss mechanisms, including radiation loss, dielectric loss, and conductor loss, are considered. The radiation loss is incorporated into a complex propagating wavenumber as an artificial loss mechanism. The quality factors associated with the three loss mechanisms are calculated and compared. The effects of radiation loss on input impedances and reflection coefficients are investigated for both high-dielectric-loss and low-dielectric-loss PCBs. Measurements are performed to validate the effectiveness of the model.

    Analytical Model for the Rectangular Power-ground Structure Including Radiation Loss

    An accurate analytical model to predict via coupling within rectangular power-return plane structures is developed. Loss mechanisms, including radiation loss, dielectric loss, and conductor loss, are considered in this model. The quality factors associated with the three loss mechanisms are calculated and compared. The radiation loss is incorporated into a complex propagating wavenumber as an artificial loss mechanism. The effects of radiation loss on input impedances and reflection coefficients are investigated for both high-dielectric-loss and low-dielectric-loss printed circuit boards. Measurements are performed to validate the effectiveness of this model.
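    Comparing the per-mechanism quality factors, as the abstract describes, is typically done through the standard reciprocal-sum relation 1/Q_total = 1/Q_rad + 1/Q_diel + 1/Q_cond, so the mechanism with the lowest Q dominates the overall loss. A minimal sketch of that bookkeeping (the numeric values in the usage are illustrative only, not from the paper):

    ```python
    def total_quality_factor(q_factors):
        """Combine per-mechanism quality factors (e.g. radiation,
        dielectric, conductor) into an overall Q via the standard
        reciprocal sum: 1/Q_total = sum(1/Q_i)."""
        return 1.0 / sum(1.0 / q for q in q_factors)
    ```

    For example, if one mechanism has Q = 10 and the others are in the thousands, the total Q sits just below 10, which is why identifying the dominant loss mechanism on a given PCB matters.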

    Improved 3D thinning algorithms for skeleton extraction

    In this study, we developed a novel 3D thinning algorithm to extract a one-voxel-wide skeleton from various 3D objects while preserving their topological information. The algorithm was tested on computer-generated image sets and real 3D reconstructions acquired from TEMT, and compared with other existing 3D thinning algorithms. It was found to preserve both medial axes and topologies very well, demonstrating many advantages over existing techniques: it is versatile, rigorous, efficient, and rotation invariant.

    Einstein Probe - a small mission to monitor and explore the dynamic X-ray Universe

    Einstein Probe is a small mission dedicated to time-domain high-energy astrophysics. Its primary goals are to discover high-energy transients and to monitor variable objects in the 0.5–4 keV X-ray band, at a sensitivity one order of magnitude higher than that of instruments currently in orbit. Its wide-field imaging capability, featuring a large instantaneous field of view (60° × 60°, ~1.1 sr), is achieved by using established micro-pore optics (MPO) lobster-eye technology, thereby offering unprecedentedly high sensitivity and large grasp. To complement this powerful monitoring ability, it also carries a narrow-field, sensitive follow-up X-ray telescope based on the same MPO technology to perform follow-up observations of newly discovered transients. Public transient alerts will be downlinked rapidly, so as to trigger multi-wavelength follow-up observations from the worldwide community. Over three of its 97-minute orbits almost the entire night sky will be sampled, with cadences ranging from 5 to 25 times per day. The scientific objectives of the mission are: to discover otherwise quiescent black holes over all astrophysical mass scales by detecting their rare X-ray transient flares, particularly tidal disruptions of stars by massive black holes at galactic centers; to detect and precisely locate the electromagnetic counterparts of gravitational-wave transients; and to carry out systematic surveys of X-ray transients and characterize the variability of X-ray sources. Einstein Probe has been selected as a candidate mission of priority (no further selection needed) in the Space Science Programme of the Chinese Academy of Sciences, aiming for launch around 2020.
    Comment: accepted for publication in PoS, Proceedings of "Swift: 10 Years of Discovery" (Proceedings of Science; ed. by P. Caraveo, P. D'Avanzo, N. Gehrels and G. Tagliaferri). Minor changes in text, references updated

    Urban–rural gradients reveal joint control of elevated CO₂ and temperature on extended photosynthetic seasons

    Get PDF
    Photosynthetic phenology has large effects on land–atmosphere carbon exchange. Due to limited experimental assessments, a comprehensive understanding of the variations of photosynthetic phenology under future climate and its associated controlling factors is still missing, despite its high sensitivity to climate. Here, we develop an approach that uses cities as natural laboratories, since plants in urban areas are often exposed to higher temperatures and carbon dioxide (CO₂) concentrations, which reflect expected future environmental conditions. Using more than 880 urban–rural gradients across the Northern Hemisphere (≥30° N), combined with concurrent satellite retrievals of Sun-induced chlorophyll fluorescence (SIF) and atmospheric CO₂, we investigated the combined impacts of elevated CO₂ and temperature on photosynthetic phenology at the large scale. The results showed that, under urban conditions of elevated CO₂ and temperature, vegetation photosynthetic activity began earlier (−5.6 ± 0.7 d), peaked earlier (−4.9 ± 0.9 d) and ended later (4.6 ± 0.8 d) than in neighbouring rural areas, with a striking two- to fourfold higher climate sensitivity than greenness phenology. The earlier start and peak of season were sensitive to both the enhancement of CO₂ and that of temperature, whereas the delayed end of season was mainly attributed to CO₂ enrichment. We used these sensitivities to project phenology shifts under four Representative Concentration Pathway climate scenarios, predicting that vegetation will have prolonged photosynthetic seasons in the coming two decades. This observation-driven study indicates that realistic urban environments, together with SIF observations, provide a promising method for studying vegetation physiology under future climate change.